Most algorithms and programs that draw the Mandelbrot set use the simple iteration of z^{2}+c → z starting with z=0, until the absolute value of z exceeds 2. That is a very fast and simple method, but it has one big disadvantage: you can never be sure whether a given point really is part of the Mandelbrot set, or whether it just hasn't escaped yet.
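The common escape-time loop can be sketched in a few lines of Python (the iteration limit of 1000 is an arbitrary choice, which is exactly the weakness described above):

```python
def escape_time(c, max_iter=1000):
    """Classic escape-time test: iterate z <- z^2 + c starting from z = 0.

    Returns the number of iterations until |z| > 2, or None when the
    limit is reached. In the None case we still don't know whether c
    belongs to the Mandelbrot set or is just slow to escape.
    """
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return None  # undecided

print(escape_time(1 + 0j))   # escapes after a few iterations
print(escape_time(-1 + 0j))  # cycle 0, -1, 0, -1, ... never escapes: None
```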

My goal is to create a procedure that, given a point in the complex plane, runs until it decides whether the point is or is not a part of the Mandelbrot set. This will perhaps be slower than the common algorithm, but, due to automatic adaptation of the iteration count, will show every detail of the set.

One obvious way to be sure that a point will never escape is if the iterations of z^{2}+c → z enter a cycle. Then they will repeat that cycle forever. However, an exact cycle is only entered by a few points inside the set. Other points just get closer and closer to a cycle. Since we use finite precision (the memory of the computer is finite, at least), they will eventually converge to an exact cycle. That, however, may take many iterations.

So, how can we know if a series of iterations (named "main iterations" hereafter) converges to a cycle?

First, it's relatively easy to find the cycle, if we know the length of the cycle. Time for some math:

z_{0} = P_{c}^{οp}(z_{0})

That says that if we apply the map

P_{c}(z) = z^{2}+c

to z_{0} exactly p times (the superscript οp denotes p-fold composition), we get z_{0} back: z_{0} is a point of a cycle of length p (or of a length dividing p).

So, we can use Newton or Laguerre or any other iterative method to find the roots of the equation P_{c}^{οp}(z) − z = 0, and thus the cycle values.

Now, let's have a look at the equation z_{0} = P_{c}^{οp}(z_{0}) from a different view. We'll simplify the following discussion from complex to real numbers, but the arguments hold for complex numbers as well.

Let's plot successive iterations of P_{c}^{οp}(z) → z on a graph, starting with some value of z. The red, vertical lines show the evaluation of P_{c}^{οp}(z) (or any other function); the black, horizontal lines show the plugging of the output back into the input.

You can see that cases b and c converge to a solution, while a and d do not. Why is that? It's because the graph of the function (the blue graph) is neither decreasing nor increasing too fast (its slope stays between -45° and +45°).

Mathematically speaking, the absolute value of the derivative of P_{c}^{οp}(z) with respect to z is below 1. This last result also holds for the case of complex numbers. So, if we can check that the absolute value of the derivative of P_{c}^{οp}(z) is below 1 everywhere between the current main iteration and the final cycle value z_{0}, we know that the main iterations will eventually converge to that cycle. That's the heart of the algorithm.
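As a small numeric illustration of the criterion (Python, with c = −0.5 chosen arbitrarily so that the map has an attracting fixed point):

```python
import math

c = -0.5
# A fixed point of z^2 + c solves z^2 - z + c = 0:
z_fixed = (1 - math.sqrt(1 - 4 * c)) / 2

# The derivative of z^2 + c is 2z; at the fixed point its absolute
# value is below 1, so the fixed point attracts:
print(abs(2 * z_fixed))  # ~0.732

# Iterating from z = 0 indeed converges to that fixed point:
z = 0.0
for _ in range(100):
    z = z * z + c
print(abs(z - z_fixed))  # very close to zero
```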

How can we make sure that, on a given interval, a function never exceeds a given bound? One simple way would be to evaluate the function many times and check if the bound is exceeded. That is obviously the wrong way, since we don't know how many evaluations would be enough. Besides, it might be quite slow, since we are actually working with complex numbers. While sampling an interval in real numbers takes, let's say, 10 evaluations, sampling the corresponding region in complex numbers with the same precision takes 79 evaluations (we're sampling a disc instead of a line).

There is another way, called interval arithmetic. It does not give precise results but upper bounds, so if the result of the interval arithmetic fits within the bounds, we know that the original result does as well. So, at the cost of precision, we have reliability.

There are many ways to perform interval arithmetic in complex numbers. One simple but efficient, and quite precise, way is to store a complex number not just as a+b⋅i (i being the imaginary unit), but as a+b⋅i+e⋅D, D being a unit disc, and e a real number. What it means is that a number is not a point but a filled disc of radius e centered at a+b⋅i. The rules for addition, subtraction and multiplication of such numbers are simple: the real and imaginary parts of the result are the same as if we were working with points. The resulting radius for addition and subtraction is the sum of the radii of the operands; the radius of the result of multiplication is

e=e_{1} |Z_{2}| + e_{2} |Z_{1}| + e_{1} e_{2}

where Z_{1} and Z_{2} are the operands, Z_{1}=a_{1}+b_{1}⋅i+e_{1}⋅D; |Z| is the absolute value of the center of Z, |Z|=√(a^{2} + b^{2}).
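These rules can be sketched as a tiny Python class (the name `Disc` and the operator methods are my own; only the radius rules themselves come from the text):

```python
import math

class Disc:
    """Complex interval represented as a disc: center a + b*i, radius e."""

    def __init__(self, a, b, e=0.0):
        self.a, self.b, self.e = a, b, e

    def __add__(self, other):
        # Centers add as complex numbers; radii add.
        return Disc(self.a + other.a, self.b + other.b, self.e + other.e)

    def __sub__(self, other):
        # Subtraction still enlarges the radius by the other radius.
        return Disc(self.a - other.a, self.b - other.b, self.e + other.e)

    def __mul__(self, other):
        # Centers multiply as complex numbers; the radius follows
        # e = e1*|Z2| + e2*|Z1| + e1*e2.
        a = self.a * other.a - self.b * other.b
        b = self.a * other.b + self.b * other.a
        m1 = math.hypot(self.a, self.b)
        m2 = math.hypot(other.a, other.b)
        return Disc(a, b, self.e * m2 + other.e * m1 + self.e * other.e)

# (3 + 4i) with radius 0.1, squared:
z = Disc(3.0, 4.0, 0.1)
w = z * z
print(w.a, w.b, w.e)  # center -7 + 24i, radius 0.1*5 + 0.1*5 + 0.01 = 1.01
```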

So, if we take a disc, centered at the final cycle value z_{0} and covering the current main iteration value, and evaluate the derivative of P_{c}^{οp}(z) at the disc using interval arithmetic, we have upper bounds for the derivative anywhere on that disc. If the absolute value of these bounds does not exceed 1, we know the absolute value of the derivative does not exceed 1 either. That means that the iterations of P_{c}^{οp}(z) → z will converge to some fixed point, and we know it's z_{0}.

Because interval arithmetic is very conservative, it will often happen that the interval arithmetic will say the derivative does not fit but in reality, it does. There are two ways to deal with that: first, we can try to use a more precise model for a complex number interval than a disc, and second, we can just live with it. If the derivative does not fit, just keep iterating the main loop and later, check the convergence again. Since the main iterations are converging towards the attractor cycle, the checked radius will get smaller and smaller, until the imprecise bounds fit eventually.

I've tried to improve the interval arithmetic precision with little success, so this is still the weakest part of the algorithm as a whole.

The last thing we need to describe, and the first we need to find, is the period candidate. From what we've seen, it's clear that if we make a mistake guessing the period, the derivative check will fail. Thus it is not critical that we find the period exactly every time; we just need to be sure we'll make the right guess from time to time.

The simplest way is Floyd's algorithm, modified to work with a distance instead of an exact match:

- Perform two iterations of the main loop: z ← (z^{2}+c)^{2}+c
- Perform one iteration of the main loop: h ← h^{2}+c
- Check if the distance between z and h is lower than ever. If it is, we have found a cycle candidate. The candidate period is the difference between the "ages" of z and h, which equals the "age" of h, which equals the number of iterations of this algorithm we've performed already. The candidate value is the current value of z (it's probably closer to the attractor cycle than the current h value).
- Otherwise the values of z and h are far apart, and we keep looping until they are close.
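The loop above can be sketched in Python (the closeness threshold 1e-9 is an arbitrary placeholder of mine; the real algorithm reports every new closest approach as a candidate instead of using a fixed threshold):

```python
def find_cycle_candidate(c, max_steps=100000):
    """Floyd's tortoise-and-hare, relaxed to work with distances.

    z is the hare (two iterations per step), h the tortoise (one).
    Returns (candidate_period, candidate_value) for the first approach
    closer than an assumed threshold, or None if none is found.
    """
    z = h = 0j
    best = float("inf")
    for age in range(1, max_steps + 1):
        z = z * z + c
        z = z * z + c      # hare: two main iterations
        h = h * h + c      # tortoise: one main iteration
        d = abs(z - h)
        if d < best:
            best = d
            # The candidate period is the tortoise's age, which is
            # also the gap between the ages of z and h.
            if d < 1e-9:   # assumed closeness threshold
                return age, z
    return None

print(find_cycle_candidate(-1 + 0j))  # c = -1 has the 2-cycle 0, -1, 0, ...
```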

This algorithm works, but it gives many false guesses and takes a long time to make the right guess. The following picture shows how successive iterations converge to a cycle of length 1 (a single point). That means we should have a cycle candidate at every main iteration. Instead, of the first 13 iterations, only 3 give candidates, and only every 13th iteration does afterwards.

This happens near the points where a higher-period bulb is attached to a lower-period bulb (in this case, a period-13 bulb attached to the period-1 cardioid). I'll describe a better algorithm in a dedicated section.

The algorithm for the derivative check will confirm that a point will eventually converge to a cycle of a given length p. However, cycles, as you know, repeat. So the check will also succeed for the same point with period 2p, or 3p, or any other integer multiple of p, since these also form a cycle. So, after a cycle is confirmed, if we want to know the exact period, we need to make sure we've found the right p, not a multiple of it.

How do we check that? In our example, if the period check fails a few times at the beginning, we'll only get candidate periods that are multiples of 13. So, what we should do after a successful period check is to check every period that divides the period we've found. We could even simplify that to just performing p additional main iterations (starting with z_{0}) and seeing whether we arrive back at z_{0} in fewer than p iterations.

That would work, in theory. But we're working with a computer representation, not with exact numbers. Thus, we'd need some distance limit (given by the working precision) in advance. There's another way that I like better. Remember how the period check works, by looking at the absolute value of the derivative of P_{c}^{οp}(z_{0})? Suppose we evaluate this derivative at all points in the cycle. If there's no value below 1, we know there cannot be a shorter cycle - the check would fail.

However, it does sometimes happen that there's a derivative that is below 1 but it does not make a real cycle. Through experiments, I've found it is possible to rule out this case reliably by checking the whole period:

- For all p' that divide p, check that, for all n with 0 ≤ n ≤ p−p',

|∂/∂z P_{c}^{ο(n+p')}(z_{0})| ≤ |∂/∂z P_{c}^{οn}(z_{0})|

We already know this holds for p'=p, since then the only thing to check is that |∂/∂z P_{c}^{οp}(z_{0})| ≤ |∂/∂z P_{c}^{ο0}(z_{0})| = 1.
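A sketch of this divisor check in Python, assuming the inequality is meant with absolute values of the derivatives (which I compute by the chain rule, as a running product of the factors 2·z_k):

```python
import math

def derivative_along_orbit(z0, c, n):
    """|d/dz P_c^(on)(z0)| via the chain rule: product of all |2*z_k|."""
    z, d = z0, 1.0 + 0j
    for _ in range(n):
        d *= 2 * z
        z = z * z + c
    return abs(d)

def exact_period(z0, c, p):
    """Smallest divisor p' of p that passes the check above (a sketch)."""
    for pp in range(1, p + 1):
        if p % pp != 0:
            continue
        if all(derivative_along_orbit(z0, c, n + pp)
               <= derivative_along_orbit(z0, c, n)
               for n in range(p - pp + 1)):
            return pp
    return p

# c = -1.1 has an attracting cycle of length 2; one of its points solves
# z^2 + z + (c + 1) = 0 (the fixed points divided out):
c = -1.1
z0 = (-1 + math.sqrt(1 - 4 * (c + 1))) / 2
print(exact_period(z0 + 0j, c + 0j, 2))  # reports 2, not 1
```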

Once we know the main iterations will converge to a cycle, we know that the point is part of the Mandelbrot set; we also know the cycle exactly (up to the machine precision) thanks to Newton or Laguerre iteration, without needing to evaluate the main iteration many, many times. That gives us enough information to find an interior distance estimate (see Wikipedia).

Of course, if, at any point, the main iterations leave the disc of radius 2, we know the point is outside the Mandelbrot set. Then it's possible to find an exterior distance estimate (see Wikipedia again).

The exterior distance estimate is given by

b=2 ln(|F|) |F|/|Fc|

where F is the value of P_{c}^{οn}(c) and Fc is its derivative with respect to c. The error in this estimate is proportional to 1/|F|, so n should be high enough for |F| to be reasonably large.

We can see that, as n increases, if |Fc| grows to very high values while |F| stays small, the exterior distance estimate will be very close to 0. This is what happens, most importantly, on the real axis between -2 and -1.401156 (as well as at some other points). While the iterations are bound forever to the -2..2 real interval, they never enter a cycle except on a few small subintervals. So our algorithm, as described up to this point, would iterate forever, trying to find a cycle or escape the disc of radius 2 around the complex zero.

However, I've found through experiments that the derivative of P_{c}^{οn}(c) with respect to c grows at every iteration. So I've added the evaluation of the derivative to the main loop, along with a check. If the magnitude of the derivative gets near the largest representable floating-point number (I've used 10^{30}), it's clear from the equation for the exterior distance estimate that the point we're testing is closer than 10^{−30} to the Mandelbrot set. So, these points form a special case I call "boundary". Whether this is really the case is an open question to me.
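A sketch of this estimate in Python; the escape radius of 1e10 is my own choice for illustration, and the derivative is carried along by the recurrence Fc ← 2·F·Fc + 1 (obtained by differentiating F ← F^2 + c with respect to c):

```python
import math

def exterior_distance(c, max_iter=10000, escape=1e10):
    """Exterior distance estimate b = 2 ln|F| |F| / |Fc|.

    F is P_c^(on)(c); Fc is dF/dc, updated as Fc <- 2*F*Fc + 1.
    We iterate until |F| is large, so the estimate's error
    (proportional to 1/|F|) is small.
    """
    F, Fc = c, 1.0 + 0j
    for _ in range(max_iter):
        if abs(F) > escape:
            return 2 * math.log(abs(F)) * abs(F) / abs(Fc)
        Fc = 2 * F * Fc + 1
        F = F * F + c
    return None  # never escaped: inside or undecided

print(exterior_distance(1 + 0j))   # point well outside: positive distance
print(exterior_distance(-1 + 0j))  # interior point: None
```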

The Newton-Raphson iterative method for solving an equation f(z)=0 works by evaluating the function F←f(r) and its derivative Fz←f'(r) at a given point r, then plugging these into the following equation:

r_{new} ← r − F/Fz

This gives a new point r_{new}, which is used again in the same equation until the function value F is close enough to zero, or until r_{new} is very close to r. Here, "close enough" means "with respect to the machine precision".

The method has two drawbacks. First, if f(z)=0 has only complex roots but we start with a real r, a complex root will never be found. This is overcome by a simple countermeasure: if the value of F has increased since the last iteration, r is shifted a bit into the complex plane.

The other problem is that it converges very slowly around roots with multiplicity higher than 1. This is commonly overcome by solving f(z)/f'(z)=0 instead of f(z)=0. In this case, the Newton iteration becomes

r_{new} ← r − (F/Fz)/(1−F Fzz/(Fz)^{2})

where F and Fz are as before, and Fzz means the second derivative of f(z) with respect to z, evaluated at r. My implementation switches between the two schemes automatically, choosing the second if both F and Fz are close to 0.
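The plain scheme can be sketched in Python, here applied to finding a fixed point of P_{c}(z) (the example equation and starting point are my own; the multiple-root variant would plug in the second formula instead):

```python
def newton_root(f, fz, r, tol=1e-12, max_iter=100):
    """Plain Newton-Raphson: r <- r - f(r)/f'(r) until convergence."""
    for _ in range(max_iter):
        r_new = r - f(r) / fz(r)
        if abs(r_new - r) < tol:
            return r_new
        r = r_new
    return r

# A fixed point of P_c(z) = z^2 + c for c = -0.5 is a root of
# f(z) = z^2 + c - z, whose derivative is f'(z) = 2z - 1:
c = -0.5
root = newton_root(lambda z: z * z + c - z, lambda z: 2 * z - 1, 0j)
print(root)  # close to (1 - sqrt(3))/2, about -0.3660
```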

Laguerre iteration is generally considered superior to the Newton iteration, because it converges faster and more reliably. Like Newton, it requires the evaluation of F, Fz, and Fzz, but it also needs the total expected number of roots, n. For polynomials, that is the degree of the polynomial. It then finds two numbers G and H,

G ← Fz/F

H ← G^{2}-Fzz/F

Then,

r_{new} ← r - n/(G±√((n-1)(nH-G^{2})))

where the sign of the square root is taken to maximize the absolute value of the denominator.

It has a similar problem with multiple roots, which I overcome with a home-brew method: first, G and H are computed as before. Then the multiplicity m is estimated as

m ← Round(Re(G^{2}/H))

If m is lower than 1, it is set to 1; if it's higher than n, it's set to n. Then a modified Laguerre iteration follows:

r_{new} ← r - n/(G±√((n-m)(nH-G^{2})/m))

This modified Laguerre method converges very fast in all cases. However, it has an unexpected drawback: it requires the degree of the polynomial, n. In our case, for p iterations of z^{2}+c→z, the polynomial has degree n=2^{p}. For large p, this can exceed the largest floating-point number that the computer can represent.
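One step of the method can be sketched in Python (the demo polynomial z^2 − 2 and the fixed m = 1 are my own choices; a full implementation would estimate m as described above and handle the large-n rearrangement):

```python
import cmath

def laguerre_step(F, Fz, Fzz, r, n, m=1):
    """One (modified) Laguerre step, following the formulas in the text.

    G = Fz/F, H = G^2 - Fzz/F; the sign of the square root is chosen
    to maximize the absolute value of the denominator. With m = 1 this
    reduces to the classic Laguerre formula.
    """
    G = Fz / F
    H = G * G - Fzz / F
    root = cmath.sqrt((n - m) * (n * H - G * G) / m)
    d_plus, d_minus = G + root, G - root
    denom = d_plus if abs(d_plus) >= abs(d_minus) else d_minus
    return r - n / denom

# Solve z^2 - 2 = 0 (a degree-2 polynomial, so n = 2), starting at r = 1:
r = 1 + 0j
for _ in range(20):
    F, Fz, Fzz = r * r - 2, 2 * r, 2 + 0j
    if abs(F) < 1e-14:
        break
    r = laguerre_step(F, Fz, Fzz, r, n=2)
print(r)  # close to sqrt(2)
```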

To use Laguerre for large periods, the equation needs to be rearranged, see the sources.

Besides the derivative check, we also check the numbers for exact match, as described (and refuted) in the outline. Other than that, it's pretty obvious.

The simple algorithm described in the outline is not very good, as has already been explained. Another possibility is to use the derivative d/dz P_{c}^{οn}(z) again: we know that when its absolute value falls below 1, we have a period candidate. The problem is that if the period check fails a few times due to imprecise interval arithmetic, we'll eventually be checking very high period multiples, and the accumulation of rounding errors may then lead to errors that could have been avoided. Moreover, it's not sufficient to count main iterations between the n's where the derivative falls below 1. For example, I've seen a case where the period was 33, but it took only 4 main iterations for the derivative to fall below 1, then the next iteration was below 1 again, and then it wasn't that low for another 28 iterations. Then the cycle repeated. So the candidate periods were 4, 1, 28, 4, 1, 28, ..., but it never found 33 = 4+1+28.

My current solution is to remember one candidate point. When the main iteration finds a candidate, I look at the value of the derivative. If it's not lower than ever before, I just add the candidate period to an accumulator. Otherwise, I check the accumulated period, reset the accumulator, and remember the new lowest derivative.

This has the disadvantage that the accumulated period could be, in the case above, either 4+1+28 or 4+1+28+4, so we need to check both the accumulated period and the accumulated period plus the candidate. See the source code if you're in doubt what I mean.

This part of the algorithm clearly needs some more research.

Just as described in the outline.

It's simply the evaluation of the equation from Wikipedia. One thing to note is that the result is sometimes negative.

That happens if the absolute value of the derivative ∂/∂z P_{c}^{οn}(z_{0}) is above 1. That cannot happen if we use the derivative check as described above. However, if we start at such a point, we have an exact match, and the period check succeeds immediately, before the derivative is checked.

So, we know the derivative at that point is above 1, so it cannot be an interior point. But we do have a period. What that means is that we've actually found a Misiurewicz point.

The application using the algorithms described here is no longer available due to space limitations on this server.

A highly improved version **can be downloaded here**. It uses experimental techniques that seem to work but don't have proofs.

This version also adds a computation of external angle, external and internal rays, and more coloring options. To some degree, it can also show orbit diagrams and Julia sets.

- Move the mouse around the Mandelbrot set to read and see detailed results of evaluating a given point. The circles show distance estimates. The purple chain is an external ray passing through the given point; the green chain is an internal ray. If "Show Orbit" is checked, you'll see the orbits of interior points in the right-hand view.
- Left-click in the Mandelbrot to zoom in.
- Right-click to zoom out or move, to choose a Julia set, or to trace an external ray.
- Choose from 6 different coloring algorithms:
- All: the interior is blue; the shade is given by the period of the limiting orbit. The exterior is green, according to distance estimate. Red and dark red are boundary points. Unknown pixels are black.
- Classic: the standard coloring algorithm, according to the number of iterations needed to escape. The interior is colored in grey, according to the orbit period.
- Exterior: The exterior is colored according to distance estimate. Interior and unknown pixels are black.
- Interior: the interior is colored according to distance estimate. Exterior and unknown pixels are black.
- Phi: The exterior is colored according to external angle. The computation of the external angle is slow, thus it is only performed when viewing it. The level of detail of angle can be changed using a trackbar to the left.

The interior is colored according to the mapping of the interior to a set of unit discs.

By right-clicking the "Color algo" group, you can see the magnitude of the potential as well, and overlay some information on the interior. You can also change the accuracy of the computation:

- Naive is a simple algorithm. You could write it yourself.
- 0 is an algorithm taken from the source codes of Mandel 5.0 by Wolf Jung.
- Higher numbers are a limit for the number of iterations. If a given point escapes before "number" iterations, the found external angle will be correct. The computation of "easy" points is not slowed (much) by increasing the limit. However, increasing the limit slows the computation significantly.

- Unknown: The yet-unknown pixels are white. The computation stops when there are fewer than 10 unknown pixels, or when "Pause" is checked.

Known interior pixels are red, exterior pixels are blue. The shade is given by the index of iteration that is nearest to Origin (0+0i). The interior pixels may not have the correct shade because of early termination, after a cycle is found and verified.

- Until a Julia set is chosen, the right picture shows orbit diagram of the real axis.
- Points shown in red: origin, C, and alpha (the less repelling fixed point).
- Move the mouse around the Julia set to see
- Lines to the next and after-next iterate
- Line to the next iterate minus C
- Lines to the two preimages
- The 3 points whose after-next iterate lands on the same point as the mouse point

- Left-click to zoom in; right click to zoom out or perform other actions.

I've partially converted the application to Java. The applet is embedded below. The main features are:

- Quick period detection
- Determines exterior and interior distance estimate of a pixel
- Unique rendering scheduler

Things I'm planning to improve:

- The internal rays are not computed.
- The areas near parabolic points (where two bulbs touch) are rendered slowly. I've already improved that in the Delphi version but now I'm short on free time.
- The distance estimates do not work for the Julia fractal but are still painted.
- I'll try to learn Swing and then improve the user interface.

Things that are not gonna change:

- The applet uses a lot of memory.
- It uses only the "double" float, so the maximum zoom level is somewhat limited.

Sources can be downloaded here.

Sources for the older Delphi version (contains bugs and is bad overall; includes a Windows .exe).
